
    Non-Interactive Proofs: What Assumptions Are Sufficient?

    A non-interactive proof system allows a prover to convince a verifier that a statement is true by sending a single message. In this thesis, we study under what assumptions we can build non-interactive proof systems with succinct verification and zero-knowledge. We obtain the following results. - Succinct Arguments: We construct the first succinct non-interactive arguments (SNARGs) for P from standard assumptions. Our construction is based on the polynomial hardness of Learning with Errors (LWE). - Zero-Knowledge: We build the first non-interactive zero-knowledge proof systems (NIZKs) for NP from the sub-exponential Decisional Diffie-Hellman (DDH) assumption in standard groups, without using pairings. To obtain these results, we build SNARGs for batch-NP from LWE and correlation-intractable hash functions for TC^0 from the sub-exponential DDH assumption, respectively, which may be of independent interest.
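
    A defining feature of a SNARG is that the proof is short and verification is fast compared to the computation being verified. As a self-contained illustration of succinctness only (not the LWE-based construction above), the Python sketch below gives Merkle opening proofs whose size and verification cost grow logarithmically in the number of committed values; all names here are illustrative.

        import hashlib

        def H(*parts: bytes) -> bytes:
            # SHA-256 over the concatenation of the parts.
            h = hashlib.sha256()
            for p in parts:
                h.update(p)
            return h.digest()

        def build_tree(leaves):
            # Assumes len(leaves) is a power of two.
            level = [H(leaf) for leaf in leaves]
            tree = [level]
            while len(level) > 1:
                level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
                tree.append(level)
            return tree  # tree[-1][0] is the root commitment

        def open_leaf(tree, idx):
            # Proof size: one hash per level, i.e., O(log n).
            proof = []
            for level in tree[:-1]:
                proof.append(level[idx ^ 1])  # sibling at this level
                idx //= 2
            return proof

        def verify_opening(root, leaf, idx, proof):
            # Verification performs O(log n) hashes, regardless of n otherwise.
            node = H(leaf)
            for sib in proof:
                node = H(node, sib) if idx % 2 == 0 else H(sib, node)
                idx //= 2
            return node == root

        leaves = [bytes([i]) * 4 for i in range(8)]
        tree = build_tree(leaves)
        root = tree[-1][0]
        assert verify_opening(root, leaves[5], 5, open_leaf(tree, 5))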

    A comparative study of two molecular mechanics models based on harmonic potentials

    We show that two molecular mechanics models, the stick-spiral model and the beam model, predict considerably different mechanical properties of materials, even though both are parameterized by energy equivalence. The difference between the two models is independent of the material, since all parameters of the beam model are obtained from the harmonic potentials. We demonstrate this difference for finite-width graphene nanoribbons and a single polyethylene chain, comparing results of molecular dynamics (MD) simulations with harmonic potentials against finite element calculations with the beam model. We also find that the difference strongly depends on the loading mode, chirality, and width of the graphene nanoribbons, and that it increases with decreasing nanoribbon width under the pure bending condition. The maximum difference in the predicted mechanical properties between the two models can exceed 300% across loading modes. Comparing the two models with MD results using the AIREBO potential, we find that the stick-spiral model overestimates and the beam model underestimates the mechanical properties of narrow armchair graphene nanoribbons under the pure bending condition.
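
    In beam-model parameterization, the beam stiffnesses are equated with the harmonic force constants: k_r = EA/L for stretching and k_theta = EI/L for bending, which for a circular cross-section give d = 4*sqrt(k_theta/k_r) and E = k_r^2 * L / (4*pi*k_theta). The Python sketch below evaluates this mapping with force-constant values commonly used for carbon-carbon bonds in the molecular structural mechanics literature; the specific numbers are assumptions for illustration.

        import math

        # Assumed harmonic force constants for C-C bonds in graphene
        # (values of the kind used in molecular structural mechanics work).
        k_r = 6.52e-7       # stretching force constant, N/nm
        k_theta = 8.76e-10  # bending force constant, N*nm/rad^2
        L = 0.1421          # C-C bond length, nm

        # Circular beam: k_r = EA/L and k_theta = EI/L,
        # with A = pi*d^2/4 and I = pi*d^4/64.
        d = 4 * math.sqrt(k_theta / k_r)          # beam diameter, nm
        E = k_r**2 * L / (4 * math.pi * k_theta)  # Young's modulus, N/nm^2

        print(f"beam diameter d = {d:.4f} nm")        # about 0.147 nm
        print(f"Young's modulus E = {E:.3e} N/nm^2")  # about 5.5e-6 N/nm^2 (5.5 TPa)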

    Optimal Key Consensus in Presence of Noise

    In this work, we abstract some key ingredients of previous key exchange protocols based on LWE and its variants by introducing and formalizing a building tool, referred to as key consensus (KC), together with its asymmetric variant AKC. KC and AKC allow two communicating parties to reach consensus from close values obtained by some secure information exchange. We then derive upper bounds on the parameters achievable by any KC and AKC. KC and AKC are fundamental to lattice-based cryptography, in the sense that a list of cryptographic primitives based on LWE and its variants (including key exchange, public-key encryption, and more) can be modularly constructed from them. As a conceptual contribution, this greatly simplifies the design and analysis of these cryptosystems in the future. We then design and analyze both general and highly practical KC and AKC schemes, referred to as OKCN and AKCN respectively for simplicity of presentation. Based on KC and AKC, we present generic constructions of key exchange (KE) from LWR, LWE, RLWE, and MLWE. The generic construction allows versatile instantiations with our OKCN and AKCN schemes, for which we elaborate on evaluating and choosing concrete parameters to achieve a well-balanced performance among security, computational cost, bandwidth efficiency, error rate, and simplicity of operation.
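
    To make the AKC abstraction concrete, here is a toy one-bit asymmetric key consensus in Python, illustrating only the syntax and the correctness condition (not the OKCN/AKCN schemes themselves): Bob chooses the key bit and sends a hint, and Alice, holding any value within distance q/4 of Bob's, reaches the same bit.

        import random

        q = 12289  # an odd modulus, as in some RLWE instantiations

        def con(sigma2: int, k: int) -> int:
            # Bob: from his value sigma2 and chosen key bit k, compute hint v.
            return (((q + 1) // 2) * k - sigma2) % q

        def rec(sigma1: int, v: int) -> int:
            # Alice: recover the key bit from her close value sigma1 and the hint.
            x = (sigma1 + v) % q
            return ((4 * x + q) // (2 * q)) % 2  # round(2x/q) mod 2

        # Consensus holds whenever |sigma1 - sigma2| < q/4 (mod q).
        for _ in range(10000):
            sigma2 = random.randrange(q)
            e = random.randrange(-(q // 4) + 1, q // 4)
            sigma1 = (sigma2 + e) % q
            k = random.randrange(2)
            assert rec(sigma1, con(sigma2, k)) == k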

    Indistinguishability Obfuscation via Mathematical Proofs of Equivalence

    Over the last decade, indistinguishability obfuscation (iO) has emerged as a seemingly omnipotent primitive in cryptography. Moreover, recent breakthrough work has demonstrated that iO can be realized from well-founded assumptions. A thorn in all this remarkable progress is a limitation of all known constructions of general-purpose iO: the security reduction incurs a loss that is exponential in the input length of the function. This "input-length barrier" to iO stems from the non-falsifiability of the iO definition and is discussed in folklore as being possibly inherent. It has many negative consequences; notably, constructing iO for programs with inputs of unbounded length remains elusive due to this barrier. We present a new framework aimed at overcoming the input-length barrier. Our approach relies on short mathematical proofs of functional equivalence of circuits (and Turing machines) to avoid the brute-force "input-by-input" check employed in prior works. - We show how to obfuscate circuits that have efficient proofs of equivalence in propositional logic with a security loss independent of input length. - Next, we show how to obfuscate Turing machines with inputs of unbounded length, whose functional equivalence can be proven in Cook's theory PV. - Finally, we demonstrate applications of our results to succinct non-interactive arguments and witness encryption, and provide guidance on using our techniques for building new applications. To realize our approach, we depart from prior work and develop a new gate-by-gate obfuscation template that preserves the topology of the input circuit.
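
    The "input-by-input" check the paper avoids can be made concrete: below is a toy brute-force equivalence test for two circuits, whose cost is exponential in the input length. Short propositional proofs of equivalence are meant to replace exactly this check; the circuits here are purely illustrative.

        from itertools import product

        # Two syntactically different circuits for the same function
        # (three-bit majority).
        def C1(a, b, c):
            return (a and b) or (a and c) or (b and c)

        def C2(a, b, c):
            return (a and (b or c)) or (b and c)

        def brute_force_equivalent(f, g, n):
            # Cost 2^n: the exponential-in-input-length check that short
            # proofs of equivalence would avoid.
            return all(bool(f(*x)) == bool(g(*x)) for x in product([0, 1], repeat=n))

        print(brute_force_equivalent(C1, C2, 3))  # True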

    Pre-Constrained Encryption

    In all existing encryption systems, the owner of the master secret key has the ability to decrypt all ciphertexts. In this work, we propose a new notion of pre-constrained encryption (PCE) in which the owner of the master secret key does not have "full" decryption power. Instead, its decryption power is constrained in a pre-specified manner during system setup. We present formal definitions and constructions of PCE, and discuss societal applications and implications for some well-studied cryptographic primitives.
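
    The following Python sketch models only the syntax of PCE (Setup takes the constraint; the master secret key decrypts only ciphertexts whose attribute satisfies it). All names are hypothetical, and the toy enforces the constraint by construction rather than cryptographically; it is not a secure construction.

        from typing import Any, Callable, Tuple

        def setup(constraint: Callable[[Any], bool]) -> Tuple[str, dict]:
            mpk = "public-parameters"         # placeholder public parameters
            msk = {"constraint": constraint}  # decryption power fixed at setup
            return mpk, msk

        def enc(mpk: str, attribute: Any, message: str) -> Tuple[Any, str]:
            return (attribute, message)       # placeholder ciphertext

        def dec(msk: dict, ct: Tuple[Any, str]):
            attribute, message = ct
            if not msk["constraint"](attribute):
                return None                   # outside the pre-specified set
            return message

        mpk, msk = setup(lambda attr: attr == "flagged")
        print(dec(msk, enc(mpk, "flagged", "m0")))  # m0
        print(dec(msk, enc(mpk, "benign", "m1")))   # None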

    Block Edit Errors with Transpositions: Deterministic Document Exchange Protocols and Almost Optimal Binary Codes

    Document exchange and error correcting codes are two fundamental problems regarding communication. In the first problem, Alice and Bob each hold a string, and the goal is for Alice to send a short sketch to Bob so that Bob can recover Alice's string. In the second problem, Alice sends a message with some redundant information to Bob through a channel that can add adversarial errors, and the goal is for Bob to correctly recover the message despite the errors. In both problems, an upper bound is placed on the number of errors between the two strings or that the channel can add, and a major goal is to minimize the size of the sketch or the redundant information. In this paper we focus on deterministic document exchange protocols and binary error correcting codes. Both problems have been studied extensively. In the case of Hamming errors (i.e., bit substitutions) and bit erasures, we have explicit constructions with asymptotically optimal parameters. However, other error types are still rather poorly understood. In a recent work [Kuan Cheng et al., 2018], the authors constructed explicit deterministic document exchange protocols and binary error correcting codes for edit errors with almost optimal parameters. Unfortunately, the constructions in [Kuan Cheng et al., 2018] do not work for other common errors such as block transpositions. In this paper, we generalize the constructions in [Kuan Cheng et al., 2018] to handle a much larger class of errors, including bursts of insertions and deletions as well as block transpositions. Specifically, we consider document exchange and error correcting codes where the total number of block insertions, block deletions, and block transpositions is at most k <= alpha n/log n for some constant 0 < alpha < 1. In addition, the total number of bits inserted and deleted by the first two kinds of operations is at most t <= beta n for some constant 0 < beta < 1, where n is the length of Alice's string or message. We construct explicit, deterministic document exchange protocols with sketch size O((k log n + t) log^2(n/(k log n + t))) and explicit binary error correcting codes with O(k log n log log log n + t) redundant bits. As a comparison, the information-theoretic optimum for both problems is Theta(k log n + t). As far as we know, there were previously no known explicit deterministic document exchange protocols in this setting, and the best known binary codes need Omega(n) redundant bits even to correct a single block transposition [L. J. Schulman and D. Zuckerman, 1999].
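
    The document exchange setting itself is easy to demonstrate with a toy protocol, shown below in Python: Alice's sketch is one short hash per block plus a single XOR parity block, which lets Bob repair any one substituted block. This handles only a single block substitution and is unrelated to the paper's construction for block insertions, deletions, and transpositions; it just makes the sketch/recovery interface concrete.

        import hashlib
        from functools import reduce

        BLOCK = 8  # bytes per block (toy parameter)

        def blocks(s: bytes):
            return [s[i:i + BLOCK] for i in range(0, len(s), BLOCK)]

        def xor(x: bytes, y: bytes) -> bytes:
            return bytes(p ^ q for p, q in zip(x, y))

        def sketch(a: bytes):
            # Alice's sketch: a short hash per block plus one XOR parity block.
            bs = blocks(a)
            hashes = [hashlib.sha256(b).digest()[:8] for b in bs]
            return hashes, reduce(xor, bs)

        def recover(b: bytes, hashes, parity):
            # Bob locates the mismatching block via the hashes and rebuilds
            # it from the parity and his matching blocks.
            bs = blocks(b)
            bad = [i for i, blk in enumerate(bs)
                   if hashlib.sha256(blk).digest()[:8] != hashes[i]]
            if not bad:
                return b
            assert len(bad) == 1, "toy protocol handles one bad block"
            i = bad[0]
            bs[i] = reduce(xor, (blk for j, blk in enumerate(bs) if j != i), parity)
            return b"".join(bs)

        alice = b"block00|block01|block02|block03|"
        bob   = b"block00|BLOCKXX|block02|block03|"  # one substituted block
        hashes, parity = sketch(alice)
        assert recover(bob, hashes, parity) == alice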

    AKCN-E8: Compact and Flexible KEM from Ideal Lattice

    A remarkable breakthrough in mathematics in recent years is the proof of the long-standing conjecture that sphere packing (i.e., packing unit balls) based on the E_8 lattice achieves the best density for sphere packing in R^8 [V17]. In this work, based on the E_8 lattice code, we design a mechanism for asymmetric key consensus from noise (AKCN), referred to as AKCN-E8, for error correction and key consensus. As a direct application of the AKCN-E8 code, we present a highly practical key encapsulation mechanism (KEM) from ideal lattices based on the ring learning with errors (RLWE) problem. Compared to the RLWE-based NewHope-KEM [newhope-NIST], which is a variant of NewHope-Usenix [newhope15] and is now a promising candidate in the second round of the NIST post-quantum cryptography (PQC) standardization competition, our AKCN-E8-KEM has the following advantages: * The size of the shared key is doubled. * More compact ciphertexts, at the same or even higher security level. * More flexible parameter selection for trade-offs among security, ciphertext size, and error probability.
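
    Using an E_8 lattice code requires decoding a noisy point to the nearest lattice point. The Python sketch below implements a standard Conway-Sloane style nearest-point algorithm as an illustration (it is not the paper's exact encoding), using the fact that E_8 is the union of D_8 and D_8 shifted by (1/2, ..., 1/2), where D_8 is the set of integer vectors with even coordinate sum.

        def decode_D8(x):
            # Round to the nearest integer vector; if the coordinate sum is
            # odd, re-round the worst coordinate the other way.
            f = [round(v) for v in x]
            if sum(f) % 2 != 0:
                i = max(range(len(x)), key=lambda j: abs(x[j] - f[j]))
                f[i] += 1 if x[i] > f[i] else -1
            return f

        def decode_E8(x):
            # Decode in both cosets and keep the closer candidate.
            y0 = decode_D8(x)
            y1 = [v + 0.5 for v in decode_D8([v - 0.5 for v in x])]
            d0 = sum((a - b) ** 2 for a, b in zip(x, y0))
            d1 = sum((a - b) ** 2 for a, b in zip(x, y1))
            return y0 if d0 <= d1 else y1

        # A noisy version of the E8 point (0.5, ..., 0.5) decodes back to it.
        print(decode_E8([0.45, 0.55, 0.5, 0.4, 0.6, 0.5, 0.55, 0.45]))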

    Linear Insertion Deletion Codes in the High-Noise and High-Rate Regimes

    This work continues the study of linear error correcting codes against adversarial insertion and deletion errors (insdel errors). Previously, the work of Cheng, Guruswami, Haeupler, and Li [Kuan Cheng et al., 2021] showed the existence of asymptotically good linear insdel codes that can correct a fraction of errors arbitrarily close to 1 over some constant-size alphabet, or achieve rate arbitrarily close to 1/2 even over the binary alphabet. As shown in [Kuan Cheng et al., 2021], these bounds are also the best possible. However, the known explicit constructions in [Kuan Cheng et al., 2021], and the subsequent improved constructions of Con, Shpilka, and Tamo [Con et al., 2022], all fall short of meeting these bounds: over any constant-size alphabet, they can only achieve rate < 1/8 or correct a < 1/4 fraction of errors; over the binary alphabet, they can only achieve rate < 1/1216 or correct a < 1/54 fraction of errors. Previous techniques appear to face inherent barriers to achieving rate better than 1/4 or correcting more than a 1/2 fraction of errors. In this work we give new constructions of such codes that meet these bounds, namely, asymptotically good linear insdel codes that can correct a fraction of errors arbitrarily close to 1 over some constant-size alphabet, and binary asymptotically good linear insdel codes that can achieve rate arbitrarily close to 1/2. All our constructions are efficiently encodable and decodable. They are based on a novel approach of code concatenation that embeds the index information implicitly into the codewords; this significantly differs from previous techniques and may be of independent interest. Finally, we also prove the existence of linear concatenated insdel codes with parameters that match random linear codes, and propose a conjecture about linear insdel codes.
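
    The basic decoding principle behind insdel codes can be illustrated directly: codewords that are pairwise far apart in insdel distance (minimum number of insertions plus deletions) can be decoded by nearest-codeword search under that distance. The Python sketch below shows this on a toy two-codeword code; it is unrelated to the paper's concatenation-based construction and is exponential-time in general.

        def insdel_distance(a: str, b: str) -> int:
            # DP for the minimum number of insertions and deletions
            # transforming a into b (no substitutions).
            m, n = len(a), len(b)
            d = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m + 1):
                d[i][0] = i
            for j in range(n + 1):
                d[0][j] = j
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    if a[i - 1] == b[j - 1]:
                        d[i][j] = d[i - 1][j - 1]
                    else:
                        d[i][j] = 1 + min(d[i - 1][j], d[i][j - 1])
            return d[m][n]

        def decode(received: str, code):
            # Brute-force nearest-codeword decoding under insdel distance.
            return min(code, key=lambda c: insdel_distance(received, c))

        code = ["000000000", "111111111"]  # insdel distance 18 between codewords
        received = "0010000"  # three deletions and one insertion applied
        print(decode(received, code))  # 000000000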